Endocytic trafficking is required for neuron cell death through regulating TGF-beta signaling in Drosophila melanogaster
Programmed cell death (PCD) is an essential feature of central nervous system development in Drosophila as well as in mammals. During metamorphosis, a group of peptidergic neurons (vCrz) is eliminated from the larval central nervous system (CNS) via PCD within 6-7 h after puparium formation. To better understand this process, we first characterized the development of the vCrz neurons, including their lineages and birth windows, using the MARCM (Mosaic Analysis with a Repressible Cell Marker) assay. Further genetic and MARCM analyses showed not only that Myoglianin (Myo) and its type I receptor Baboon are required for neuronal cell death, but also that this death signal is extensively regulated by endocytic trafficking in Drosophila melanogaster. We found that clathrin-mediated membrane receptor internalization and the subsequent endocytic events involving the Rab5-dependent early endosome and the Rab11-dependent recycling endosome participate differentially in TGF-β signaling. Two early-endosome-enriched proteins, SARA and Hrs, were found to act as cytosolic retention factors of Smad2, indicating that endocytosis mediates TGF-β signaling by regulating the dissociation of Smad2 from its cytosolic retention factor.
AnyPose: Anytime 3D Human Pose Forecasting via Neural Ordinary Differential Equations
Anytime 3D human pose forecasting is crucial to synchronous real-world human-machine interaction, where the term "anytime" refers to predicting human pose at any real-valued time step. However, to the best of our knowledge, all existing methods in human pose forecasting perform predictions at preset, discrete time intervals. We therefore introduce AnyPose, a lightweight continuous-time neural architecture that models human behavior dynamics with neural ordinary differential equations. We validate our framework on the Human3.6M, AMASS, and 3DPW datasets and conduct a series of comprehensive analyses comparing against existing methods and exploring the intersection of human pose and neural ordinary differential equations. Our results demonstrate that AnyPose predicts future poses with high accuracy and requires significantly less computation time than traditional methods for anytime prediction tasks.
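The continuous-time idea behind this abstract can be illustrated with a minimal sketch (a hypothetical toy, not the authors' implementation): a small NumPy "dynamics network" whose output is integrated with fixed-step Euler up to any real-valued horizon, so the same model answers queries at arbitrary continuous times.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6  # toy pose-state dimension (real models use full 3D joint sets)

# Hypothetical learned dynamics f(h, t) = tanh(W h + b); in a neural ODE
# these weights would be trained, here they are random for illustration.
W = rng.normal(scale=0.1, size=(D, D))
b = np.zeros(D)

def dynamics(h, t):
    """Time derivative of the latent pose state."""
    return np.tanh(h @ W.T + b)

def predict_at(h0, t, dt=0.01):
    """Euler-integrate the ODE from time 0 to any real-valued time t."""
    steps = int(np.ceil(t / dt))
    h, s = h0.copy(), 0.0
    for _ in range(steps):
        step = min(dt, t - s)          # shrink the last step to land on t
        h = h + step * dynamics(h, s)
        s += step
    return h

h0 = rng.normal(size=D)                # encoded observed pose history
pose_a = predict_at(h0, 0.3)           # query at t = 0.3 s
pose_b = predict_at(h0, 0.317)         # or any other real-valued instant
```

Because the trajectory is defined by an ODE rather than by a fixed grid of decoder steps, no retraining or interpolation is needed for off-grid query times; in practice an adaptive solver would replace the fixed-step Euler loop.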
DeRi-Bot: Learning to Collaboratively Manipulate Rigid Objects via Deformable Objects
Recent research efforts have yielded significant advancements in manipulating objects under homogeneous settings, where the robot is required to manipulate either rigid or deformable (soft) objects. However, manipulation under heterogeneous setups that involve both rigid and one-dimensional (1D) deformable objects remains an unexplored area of research. Such setups are common in various scenarios that involve the transportation of heavy objects via ropes, e.g., on factory floors, at disaster sites, and in forestry. To address this challenge, we introduce DeRi-Bot, the first framework that enables the collaborative manipulation of rigid objects with deformable objects. Our framework comprises an Action Prediction Network (APN) and a Configuration Prediction Network (CPN) to model the complex patterns and stochasticity of soft-rigid body systems. We demonstrate the effectiveness of DeRi-Bot in moving rigid objects to a target position with ropes connected to robotic arms. Furthermore, DeRi-Bot is a distributed method that can accommodate an arbitrary number of robots or human partners without reconfiguration or retraining. We evaluate our framework in both simulated and real-world environments and show that it achieves promising results with strong generalization across different types of objects and multi-agent settings, including human-robot collaboration.
Comment: This paper has been accepted by IEEE RA-
DeRi-IGP: Manipulating Rigid Objects Using Deformable Objects via Iterative Grasp-Pull
Heterogeneous system manipulation, i.e., manipulating rigid objects via deformable (soft) objects, is an emerging field that remains in the early stages of research. Existing works in this field suffer from a limited action and operational space, poor generalization ability, and expensive development. To address these challenges, we propose a universally applicable and effective moving primitive, Iterative Grasp-Pull (IGP), and a sample-based framework, DeRi-IGP, to solve the heterogeneous system manipulation task. The DeRi-IGP framework uses the robots' local onboard RGB-D sensors to observe the environment, which comprises a soft-rigid body system. It then uses this information to iteratively grasp and pull a soft body (e.g., a rope) to move the attached rigid body to a desired location. We evaluate the effectiveness of our framework on various heterogeneous manipulation tasks and compare its performance with several state-of-the-art baselines. The results show that DeRi-IGP outperforms the other methods by a significant margin. We also demonstrate the advantage of the large operational space of IGP in a long-distance object acquisition task in both simulated and real environments.
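As an illustration only (a hypothetical 1D toy under strong simplifying assumptions, not the authors' framework), the grasp-pull cycle can be sketched as: repeatedly grasp the taut end of the rope, pull it a bounded step toward the goal side, and let the taut rope drag the attached rigid body, until the body is close enough to the target.

```python
import numpy as np

def iterative_grasp_pull(rigid_pos, goal, rope_len=1.0, step=0.4,
                         tol=0.05, max_iters=100):
    """Toy 1D grasp-pull loop (an illustrative assumption, not DeRi-IGP).

    A rigid body at rigid_pos is attached to a rope of length rope_len.
    Each iteration grasps the taut rope end and pulls it a bounded step
    toward the point rope_len beyond the goal, dragging the body along.
    """
    direction = np.sign(goal - rigid_pos)
    end_target = goal + direction * rope_len        # where the rope end must go
    for _ in range(max_iters):
        if abs(goal - rigid_pos) <= tol:
            break
        rope_end = rigid_pos + direction * rope_len           # grasp point
        rope_end += np.clip(end_target - rope_end, -step, step)  # bounded pull
        rigid_pos = rope_end - direction * rope_len           # rope drags body
    return rigid_pos

final = iterative_grasp_pull(rigid_pos=0.0, goal=3.0)
```

The bounded per-iteration pull (`step`) stands in for the reach of a single arm motion; chaining many such grasp-pull cycles is what gives the primitive its large effective operational space.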
INFLUENCE NETWORK ANALYSIS ON SOCIAL NETWORK AND CRITICAL INFRASTRUCTURE INTERDEPENDENCIES
Inspired by social networks, the influence network has proven to be a powerful tool for analyzing influence propagation within a group of entities. An introduction to the topic is given in Chapter 1. In Chapter 2 of the thesis, a brief survey of some major results on single-layer influence network analysis is presented, and we propose a new multi-layer influence network framework. In Chapters 3 and 4, we give two applications of the single-layer influence network, to social networks and to cellular base station interdependency networks. In Chapter 5, we propose a new multi-layer linear threshold influence network to analyze the interdependency of critical infrastructure sectors in metro Atlanta and Florida. Summary and conclusions are presented in Chapter 6.
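The linear threshold model named above can be sketched in its generic single-layer textbook form (an illustration, not the thesis's multi-layer formulation): a node activates once the summed influence weight of its already-active in-neighbors reaches that node's threshold, and activation cascades until a fixed point.

```python
# Generic linear threshold influence propagation (illustrative sketch).
def linear_threshold(weights, thresholds, seeds):
    """weights[u][v]: influence of node u on node v; node v activates
    when the summed weight of its active in-neighbors reaches
    thresholds[v]. Iterates until no new node activates."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in thresholds:
            if v in active:
                continue
            incoming = sum(w.get(v, 0.0) for u, w in weights.items()
                           if u in active)
            if incoming >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# Tiny example: seeding 'a' activates 'b', then 'a' and 'b' jointly
# push 'c' over its threshold, so all three nodes end up active.
weights = {"a": {"b": 0.6, "c": 0.3}, "b": {"c": 0.3}, "c": {}}
thresholds = {"a": 0.5, "b": 0.5, "c": 0.6}
cascade = linear_threshold(weights, thresholds, {"a"})
```

A multi-layer variant, as in the thesis, would additionally carry weights between nodes in different layers (e.g., between infrastructure sectors), but the activation rule per node stays the same.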
A Review of Intrusion Detection Technology Based on Deep Reinforcement Learning
With the rapid development of modern science and technology, network attacks are constantly evolving, so traditional network security defense mechanisms need further improvement. Through an extensive investigation, this paper presents the latest work on network intrusion detection technology based on deep learning. First, the paper introduces the concepts related to network intrusion detection technology. On this basis, we further evaluate the performance of three common deep learning models in intrusion detection and conclude that the DBN algorithm has some strong advantages. Finally, we put forward several strategies for improving intrusion detection models.
ECO: Egocentric Cognitive Mapping
We present a new method to localize a camera within a previously unseen environment perceived from an egocentric point of view. Although this is, in general, an ill-posed problem, humans can effortlessly and efficiently determine their relative location and orientation and navigate in previously unseen environments, e.g., finding a specific item in a new grocery store. To enable such a capability, we design a new egocentric representation, which we call ECO (Egocentric COgnitive map). ECO is biologically inspired by the cognitive map that enables human navigation, and it encodes the surrounding visual semantics with respect to both distance and orientation. ECO possesses three main properties: (1) reconfigurability: complex semantics and geometry are captured via the synthesis of atomic visual representations (e.g., image patches); (2) robustness: the visual semantics are registered in a geometrically consistent way (e.g., aligned with respect to the gravity vector, frontalized, and rescaled to a canonical depth), thus enabling us to learn meaningful atomic representations; (3) adaptability: a domain adaptation framework is designed to generalize the learned representation without manual calibration. As a proof of concept, we use ECO to localize a camera within real-world scenes (various grocery stores) and demonstrate performance improvements over existing semantic localization approaches.
Attention-enhanced connectionist temporal classification for discrete speech emotion recognition
Discrete speech emotion recognition (SER), the assignment of a single emotion label to an entire speech utterance, is typically performed as a sequence-to-label task. This approach, however, is limited in that it can result in models that do not capture temporal changes in the speech signal, including those indicative of a particular emotion. One potential solution to overcome this limitation is to model SER as a sequence-to-sequence task instead. In this regard, we have developed an attention-based bidirectional long short-term memory (BLSTM) neural network in combination with a connectionist temporal classification (CTC) objective function (Attention-BLSTM-CTC) for SER. We also assessed the benefits of incorporating two contemporary attention mechanisms, namely component attention and quantum attention, into the CTC framework. To the best of the authors' knowledge, this is the first time that such a hybrid architecture has been employed for SER. We demonstrate the effectiveness of our approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and FAU-Aibo Emotion corpora. The experimental results show that our proposed model outperforms current state-of-the-art approaches.
The work presented in this paper was substantially supported by the National Natural Science Foundation of China (Grant No. 61702370), the Key Program of the Natural Science Foundation of Tianjin (Grant No. 18JCZDJC36300), the Open Projects Program of the National Laboratory of Pattern Recognition, and the Senior Visiting Scholar Program of Tianjin Normal University.
Interspeech 2019
ISSN: 1990-977
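The decoding rule underlying the CTC objective mentioned above can be sketched generically (the standard textbook collapse operation, not the paper's specific code): merge consecutive repeated labels on a frame-level path, then drop the blank symbol.

```python
from itertools import groupby

BLANK = "-"  # CTC blank symbol; the token choice is an assumption here

def ctc_collapse(path):
    """Map a frame-level CTC path to its label sequence:
    merge consecutive repeats, then remove blanks."""
    return [sym for sym, _ in groupby(path) if sym != BLANK]

# Frame-level emissions over, e.g., per-frame emotion labels.
path = ["-", "ang", "ang", "-", "-", "hap", "hap", "hap", "-"]
labels = ctc_collapse(path)  # ['ang', 'hap']
```

Because many frame-level paths collapse to the same label sequence, the CTC loss sums over all of them, which is what lets a sequence-to-sequence model be trained without frame-level emotion annotations.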
Exploring Spatio-Temporal Representations by Integrating Attention-based Bidirectional-LSTM-RNNs and FCNs for Speech Emotion Recognition
Automatic emotion recognition from speech, an important and challenging task in the field of affective computing, relies heavily on the effectiveness of the speech features used for classification. Previous approaches to emotion recognition have mostly focused on the extraction of carefully hand-crafted features, and how to model spatio-temporal dynamics effectively for speech emotion recognition is still under active investigation. In this paper, we propose a method to tackle the problem of extracting emotionally relevant features from speech by combining attention-based bidirectional long short-term memory recurrent neural networks with fully convolutional networks, in order to automatically learn the best spatio-temporal representations of speech signals. The learned high-level features are then fed into a deep neural network (DNN) to predict the final emotion. Experimental results on the Chinese Natural Audio-Visual Emotion Database (CHEAVD) and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpora show that our method provides more accurate predictions than other existing emotion recognition algorithms.
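The attention mechanism shared by the two SER abstracts above can be sketched as soft attention pooling over frame-level features (a generic formulation with made-up dimensions, not either paper's exact architecture): score each time frame, normalize the scores with a softmax, and take the weighted sum as the utterance-level representation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 50, 16  # frames x feature dim (toy sizes, an assumption)

def attention_pool(H, w):
    """Soft attention pooling: e_t = H_t . w, a = softmax(e),
    utterance vector = sum_t a_t * H_t."""
    e = H @ w                      # (T,) unnormalized frame scores
    a = np.exp(e - e.max())        # subtract max for numerical stability
    a /= a.sum()                   # attention weights, sum to 1
    return a @ H, a                # (D,) pooled vector, (T,) weights

H = rng.normal(size=(T, D))        # e.g., BLSTM outputs, one row per frame
w = rng.normal(size=D)             # learned scoring vector (random here)
utt_vec, weights = attention_pool(H, w)
```

The pooled vector is what a downstream classifier would consume; frames with high scores dominate the sum, which is how attention emphasizes the emotionally salient portions of an utterance.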